EF/CF: High Performance Smart Contract Fuzzing for Exploit Generation
Smart contracts are increasingly being used to manage large numbers of
high-value cryptocurrency accounts. There is a strong demand for automated,
efficient, and comprehensive methods to detect security vulnerabilities in a
given contract. While the literature features a plethora of analysis methods
for smart contracts, existing proposals do not keep pace with the increasing
complexity of contracts: current tools raise false alarms and miss bugs in
today's smart contracts, which are increasingly defined by complexity and
interdependencies. To scale accurate analysis to modern smart
contracts, we introduce EF/CF, a high-performance fuzzer for Ethereum smart
contracts. In contrast to previous work, EF/CF efficiently and accurately
models complex smart contract interactions, such as reentrancy and
cross-contract interactions, at a very high fuzzing throughput. To achieve
this, EF/CF transpiles smart contract bytecode into native C++ code, thereby
enabling the reuse of existing, optimized fuzzing toolchains. Furthermore,
EF/CF increases fuzzing efficiency by employing a structure-aware mutation
engine for smart contract transaction sequences and using a contract's ABI to
generate valid transaction inputs. In a comprehensive evaluation, we show that
EF/CF scales better -- without compromising accuracy -- to complex contracts
compared to state-of-the-art approaches, including other fuzzers,
symbolic/concolic execution, and hybrid approaches. Moreover, we show that
EF/CF can automatically generate transaction sequences that exploit reentrancy
bugs to steal Ether.

Comment: To be published at Euro S&P 2023.
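As a concrete illustration of the two efficiency ideas named above, structure-aware mutation of transaction sequences and ABI-guided input generation, here is a minimal Python sketch. It is not EF/CF's implementation (EF/CF transpiles contract bytecode to native C++ and reuses optimized fuzzing toolchains); the ABI table and every function name below (random_value, random_tx, mutate_sequence) are hypothetical stand-ins invented for illustration.

    import random

    # Hypothetical sketch (not EF/CF's actual code): structure-aware mutation
    # of a smart-contract transaction sequence, with arguments generated from
    # the contract's ABI so that every fuzzing input decodes to a valid call.

    ABI = {  # toy ABI: function name -> list of parameter types
        "deposit": ["uint256"],
        "withdraw": ["uint256"],
        "transfer": ["address", "uint256"],
    }

    def random_value(abi_type):
        """Generate a random value that is valid for the given ABI type."""
        if abi_type == "uint256":
            # Bias toward boundary values, a common fuzzing heuristic.
            return random.choice([0, 1, 2**256 - 1, random.getrandbits(64)])
        if abi_type == "address":
            return "0x%040x" % random.getrandbits(160)
        raise ValueError("unsupported ABI type: " + abi_type)

    def random_tx():
        fn = random.choice(list(ABI))
        return {"fn": fn, "args": [random_value(t) for t in ABI[fn]]}

    def mutate_sequence(seq):
        """Apply one structure-aware mutation and return the new sequence."""
        # Deep-enough copy so the parent input survives unchanged.
        seq = [{"fn": t["fn"], "args": list(t["args"])} for t in seq]
        op = random.choice(["insert", "delete", "swap", "mutate_arg"])
        if op == "insert" or not seq:
            seq.insert(random.randrange(len(seq) + 1), random_tx())
        elif op == "delete":
            del seq[random.randrange(len(seq))]
        elif op == "swap" and len(seq) >= 2:
            i, j = random.sample(range(len(seq)), 2)
            seq[i], seq[j] = seq[j], seq[i]
        else:
            # Re-generate one argument from its ABI type instead of flipping
            # raw bytes, which would almost always yield an invalid call.
            tx = random.choice(seq)
            i = random.randrange(len(tx["args"]))
            tx["args"][i] = random_value(ABI[tx["fn"]][i])
        return seq

    seed = [random_tx() for _ in range(3)]
    print(mutate_sequence(seed))

Mutating at the level of typed transactions rather than raw bytes means nearly every generated input decodes to a valid call, letting the fuzzer spend its throughput exploring contract logic instead of being rejected by the ABI decoder.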
Adversarial Edit Attacks for Tree Data
Paaßen B. Adversarial Edit Attacks for Tree Data. In: Yin H, Camacho D, Tino P, eds. Proceedings of the 20th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2019). Lecture Notes in Computer Science. Vol 11871. Cham: Springer; 2019: 359-366.

Many machine learning models can be attacked with adversarial examples, i.e. inputs close to correctly classified examples that are classified incorrectly. However, most research on adversarial attacks to date is limited to vectorial data, in particular image data. In this contribution, we extend the field by introducing adversarial edit attacks for tree-structured data with potential applications in medicine and automated program analysis. Our approach solely relies on the tree edit distance and a logarithmic number of black-box queries to the attacked classifier, without any need for gradient information. We evaluate our approach on two programming and two biomedical data sets and show that many established tree classifiers, like tree-kernel-SVMs and recursive neural networks, can be attacked effectively.
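One natural reading of the abstract's "logarithmic number of black-box queries" is a binary search along a tree edit script toward a reference tree of a different class. The Python sketch below is a plausible reconstruction under that assumption, not the paper's published code: tree_edit_script, apply_edits, and classify are hypothetical helpers supplied by the caller (the script could, for instance, be derived from the Zhang-Shasha tree edit distance).

    def adversarial_edit_attack(x, reference, classify,
                                tree_edit_script, apply_edits):
        """Binary-search the shortest edit-script prefix that flips the label."""
        original_label = classify(x)             # one black-box query
        script = tree_edit_script(x, reference)  # edits turning x into reference
        if classify(apply_edits(x, script)) == original_label:
            return None                          # full script does not flip the label
        # Invariant: a prefix of length hi flips the label, length lo does not
        # (heuristically assuming longer prefixes stay flipped).
        lo, hi = 0, len(script)
        while hi - lo > 1:                       # O(log |script|) queries
            mid = (lo + hi) // 2
            if classify(apply_edits(x, script[:mid])) == original_label:
                lo = mid
            else:
                hi = mid
        return apply_edits(x, script[:hi])       # minimal flipping prefix found

    # Toy demo: lists stand in for trees, each "edit" appends one node, and
    # the toy classifier labels a tree by whether it has more than 4 nodes.
    x, reference = [1, 2], [1, 2, 3, 4, 5, 6]
    toy_script = lambda a, b: b[len(a):]         # nodes to append to reach b
    toy_apply = lambda a, edits: a + list(edits)
    toy_classify = lambda t: int(len(t) > 4)
    print(adversarial_edit_attack(x, reference, toy_classify,
                                  toy_script, toy_apply))
    # -> [1, 2, 3, 4, 5], the shortest prefix that flips the toy label

Because the search only ever calls classify, it works against any black-box tree classifier, which matches the abstract's claim that no gradient information is required.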